Algorithmic probability
In algorithmic information theory, algorithmic (Solomonoff) probability is a mathematical method of assigning a prior probability to a given observation. In a theoretical sense, the prior is universal. It is used in inductive inference theory and in analyses of algorithms. Since it is not computable, it must be approximated.〔Hutter, M., Legg, S., and Vitanyi, P., "Algorithmic Probability", Scholarpedia, 2(8):2572, 2007.〕
It deals with the following questions: given a body of data about some phenomenon that one wants to understand, how can one select the most probable hypothesis for how it was caused from among all possible hypotheses, how can one evaluate the different hypotheses, and how can one predict future data?
Algorithmic probability combines several ideas: Occam's razor; Epicurus' principle of multiple explanations; and special coding methods from modern computing theory. The prior obtained from the formula is used in Bayes' rule for prediction.〔Li, M. and Vitanyi, P., ''An Introduction to Kolmogorov Complexity and Its Applications'', 3rd Edition, Springer Science and Business Media, N.Y., 2008, p. 347〕
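For reference, Bayes' rule here takes its usual form (a standard statement, spelled out for clarity rather than quoted from the cited sources):

P(H_i \mid D) = \frac{P(D \mid H_i)\, P(H_i)}{\sum_j P(D \mid H_j)\, P(H_j)}

where D is the observed data, the hypotheses H_j range over the candidate explanations, and the algorithmic prior supplies the values P(H_j).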
Occam's razor means 'among the theories that are consistent with the observed phenomena, one should select the simplest theory'.〔ibid, p. 341〕
In contrast, Epicurus had proposed the Principle of Multiple Explanations: if more than one theory is consistent with the observations, keep all such theories.〔ibid, p. 339.〕
A special mathematical object called a universal Turing machine is used to compute, quantify and assign codes to all quantities of interest.〔Hutter, M., "Algorithmic Information Theory", Scholarpedia, 2(3):2519.〕 The universal prior is taken over the class of all computable measures; no hypothesis will have a zero probability.
Algorithmic probability combines Occam's razor and the principle of multiple explanations by giving a probability value to each hypothesis (algorithm or program) that explains a given observation: the simplest hypothesis (the shortest program) has the highest probability, and increasingly complex hypotheses (longer programs) receive increasingly small probabilities. These probabilities form a prior probability distribution for the observation, which Ray Solomonoff proved to be machine-invariant within a constant factor (the invariance theorem) and which can be used with Bayes' theorem to predict the most likely continuation of that observation. A universal Turing machine is used for the computer operations.
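In symbols, the invariance theorem can be stated as follows (a standard formulation from the literature; the notation M_U for the prior defined relative to a universal machine U is ours): for any two universal machines U and V there is a constant c > 0, depending only on U and V, such that for every observation x

M_U(x) \le c \cdot M_V(x).

Changing the reference machine therefore changes the prior only up to a multiplicative constant, which is what "machine-invariant within a constant factor" means.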
Solomonoff invented the concept of algorithmic probability with its associated invariance theorem around 1960,〔Solomonoff, R., "The Discovery of Algorithmic Probability", ''Journal of Computer and System Sciences'', Vol. 55, No. 1, pp. 73-88, August 1997.〕 publishing a report on it: "A Preliminary Report on a General Theory of Inductive Inference."〔Solomonoff, R., "A Preliminary Report on a General Theory of Inductive Inference", Report V-131, Zator Co., Cambridge, Ma. (Nov. 1960 revision of the Feb. 4, 1960 report).〕 He clarified these ideas more fully in 1964 with "A Formal Theory of Inductive Inference," Part I〔Solomonoff, R., "A Formal Theory of Inductive Inference, Part I", ''Information and Control'', Vol. 7, No. 1, pp. 1-22, March 1964.〕 and Part II.〔Solomonoff, R., "A Formal Theory of Inductive Inference, Part II", ''Information and Control'', Vol. 7, No. 2, pp. 224-254, June 1964.〕
He described a universal computer with a randomly generated input program. The program computes some possibly infinite output. The universal probability distribution is the probability distribution on all possible output strings with random input.〔Solomonoff, R., "The Kolmogorov Lecture: The Universal Distribution and Machine Learning", ''The Computer Journal'', Vol. 46, No. 6, p. 598, 2003.〕
The algorithmic probability of any given finite output prefix ''q'' is the sum of the probabilities of the programs that compute something starting with ''q''. Certain long objects with short programs have high probability.
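Written out (a standard formulation consistent with the description above; the symbols M, U and p are our notation): if U is the reference universal Turing machine and |p| is the length of program p in bits, then the algorithmic probability of a finite prefix ''q'' is

M(q) = \sum_{p \,:\, U(p) = q*} 2^{-|p|},

where U(p) = q* means that U, run on input p, produces output beginning with q. Each program of length |p| contributes 2^{-|p|}, which is exactly the chance of generating p by fair coin flips, matching the random-input picture above; because the valid programs form a prefix-free set, the Kraft inequality guarantees that the sum is at most 1 (M is a semimeasure).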
Algorithmic probability is the main ingredient of Solomonoff's theory of inductive inference, the theory of prediction based on observations. It was invented with machine learning as its goal: given a sequence of symbols, which one will come next? Solomonoff's theory provides an answer that is optimal in a certain sense, although it is incomputable. Unlike, for example, Karl Popper's informal theory of inductive inference, Solomonoff's is mathematically rigorous.
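Concretely, Solomonoff's predictor assigns to each candidate next symbol a the conditional probability (again a standard formulation, using the distribution M defined above):

P(x_{n+1} = a \mid x_1 \cdots x_n) = \frac{M(x_1 \cdots x_n\, a)}{M(x_1 \cdots x_n)},

so the predicted continuation is the one favored by the short programs that are consistent with the data seen so far.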
Algorithmic probability is closely related to the concept of Kolmogorov complexity. Kolmogorov's introduction of complexity was motivated by information theory and problems of randomness, while Solomonoff introduced algorithmic complexity for a different reason: inductive inference. Solomonoff invented a single universal prior probability that can be substituted for each actual prior probability in Bayes' rule, with Kolmogorov complexity as a side product.〔Gács, P. and Vitányi, P., "In Memoriam Raymond J. Solomonoff", ''IEEE Information Theory Society Newsletter'', Vol. 61, No. 1, March 2011, p. 11.〕
Solomonoff's enumerable measure is universal in a certain powerful sense, but the computation time can be infinite. One way of dealing with this is a variant of Leonid Levin's search algorithm,〔Levin, L.A., "Universal Search Problems", Problemy Peredaci Informacii 9, pp. 115-116, 1973〕 which limits the time spent testing the success of possible programs, with shorter programs given more time. Other methods of limiting the search space include training sequences.
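The following Python sketch illustrates the time-allocation idea behind Levin-style search under toy assumptions (it is an illustration of the scheduling principle, not Levin's or Solomonoff's actual procedure): programs are bit strings, and the interpreter `run` is a hypothetical stand-in. In phase k, each program p with |p| <= k is run for 2^(k-|p|) steps, so shorter programs receive exponentially more time.

```python
from itertools import product

def run(program, steps):
    # Hypothetical stand-in for a universal machine: this toy version
    # "halts" after 2**len(program) steps and outputs its own bits.
    # A real implementation would single-step an actual interpreter.
    if steps >= 2 ** len(program):
        return program
    return None

def levin_search(target, max_phase=20):
    # Search for a program whose output begins with `target`,
    # using Levin's schedule: in phase k, a program of length L
    # receives a budget of 2**(k - L) steps.
    for k in range(1, max_phase + 1):
        for length in range(1, k + 1):
            budget = 2 ** (k - length)  # shorter program -> bigger budget
            for bits in product("01", repeat=length):
                program = "".join(bits)
                output = run(program, budget)
                if output is not None and output.startswith(target):
                    return program, k  # first success favors short programs
    return None

if __name__ == "__main__":
    print(levin_search("01"))  # -> ('01', 4) with this toy interpreter
```

Each length class in phase k costs about 2^k steps in total (2^L programs times 2^(k-L) steps each); that balance is what lets this schedule find a solution of length L after time roughly proportional to 2^L times the program's own running time, up to constant factors.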
==Key people==

* Ray Solomonoff
* Andrey Kolmogorov

Excerpt source: Wikipedia, the free encyclopedia. The full article "Algorithmic probability" is available on Wikipedia.